Neural models that do not rely on pre-training have excelled at keyphrase generation when large annotated datasets are available. Meanwhile, newer approaches have incorporated pre-trained language models (PLMs) for their data efficiency. However, there is no systematic study of how the two types of approaches compare, or of how different design choices affect the performance of PLM-based models. To fill this knowledge gap and facilitate a more informed use of PLMs for keyphrase extraction and keyphrase generation, we present an in-depth empirical study. Formulating keyphrase extraction as sequence labeling and keyphrase generation as sequence-to-sequence generation, we perform extensive experiments in three domains. After showing that PLMs have competitive high-resource performance and state-of-the-art low-resource performance, we investigate important design choices, including in-domain PLMs, PLMs with different pre-training objectives, using PLMs under a parameter budget, and different formulations for present keyphrases. Further results show that (1) in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models; (2) with a fixed parameter budget, prioritizing model depth over width and allocating more layers to the encoder leads to better encoder-decoder models; and (3) with the four in-domain PLMs we introduce, we achieve competitive performance in the news domain and state-of-the-art performance in the scientific domain.
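The sequence-labeling formulation for present keyphrases can be made concrete with a toy BIO tagger. This is an illustrative sketch, not the paper's implementation; the function name `bio_tags` and the exact-match heuristic are assumptions for illustration only.

```python
# Minimal sketch: formulating present-keyphrase extraction as BIO
# sequence labeling over document tokens (illustrative, not the paper's code).
def bio_tags(tokens, keyphrases):
    """Label each token B/I if it starts/continues a keyphrase, else O."""
    tags = ["O"] * len(tokens)
    for kp in keyphrases:
        kp_toks = kp.split()
        n = len(kp_toks)
        for i in range(len(tokens) - n + 1):
            if [t.lower() for t in tokens[i:i + n]] == [t.lower() for t in kp_toks]:
                tags[i] = "B"                      # keyphrase start
                for j in range(i + 1, i + n):
                    tags[j] = "I"                  # keyphrase continuation
    return tags

tokens = "We study keyphrase generation with pre-trained language models".split()
print(bio_tags(tokens, ["keyphrase generation", "language models"]))
# ['O', 'O', 'B', 'I', 'O', 'O', 'B', 'I']
```

A sequence-labeling model (e.g., a BERT-like PLM with a token classification head) is then trained to predict these tags directly.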
translated by Google Translate
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Modeling users' dynamic preferences from their historical behaviors lies at the core of modern recommender systems. Because user interests are diverse, recent advances propose multi-interest networks that encode historical behaviors into multiple interest vectors. In real-world scenarios, items corresponding to the captured interests are often retrieved together for exposure and collected into training data, which produces dependencies among the interests. Unfortunately, a multi-interest network may mistakenly concentrate on subtle dependencies among the captured interests. Misled by these dependencies, it captures spurious correlations between irrelevant interests and targets, leading to unstable predictions when the training and test distributions do not match. In this paper, we introduce the widely used Hilbert-Schmidt Independence Criterion (HSIC) to measure the degree of independence among the captured interests, and we empirically show that a continuous increase in HSIC can harm model performance. Based on this, we propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL), which tries to eliminate the influence of subtle dependencies among the captured interests by learning weights for training samples via causal learning. We conduct extensive experiments on a public recommendation dataset, a large-scale industrial dataset, and synthetic datasets that simulate out-of-distribution data. The experimental results demonstrate that our proposed DESMIL outperforms state-of-the-art models. We also conduct a comprehensive model analysis to reveal, to a certain extent, why DESMIL works.
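The HSIC measure referenced above can be sketched with its standard biased empirical estimator; this is a generic illustration, not the paper's code, and the RBF kernel choice and `sigma` value are assumptions rather than the paper's settings.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix for rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator: trace(K H L H) / (n-1)^2,
    where H centers the kernel matrices."""
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
a = rng.normal(size=(200, 4))                  # one "interest" embedding
independent = rng.normal(size=(200, 4))        # unrelated embedding
dependent = a + 0.1 * rng.normal(size=(200, 4))  # strongly dependent embedding
print(hsic(a, dependent) > hsic(a, independent))  # dependence raises HSIC
```

Higher HSIC between two interest vectors indicates stronger statistical dependence, which is the quantity DESMIL seeks to suppress through sample reweighting.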
Joint 2D cardiac segmentation and 3D volume reconstruction are fundamental to building statistical cardiac anatomy models and understanding the functional mechanisms of motion patterns. However, due to the low through-plane resolution of cine MR and high inter-subject variance, accurately segmenting cardiac images and reconstructing 3D volumes are challenging. In this study, we propose DeepRecon, an end-to-end latent-space-based framework that generates multiple clinically essential outputs, including accurate image segmentation, synthetic high-resolution 3D images, and 3D reconstructed volumes. Our method identifies the optimal latent representation of cine images, which contains accurate semantic information about the cardiac structures. Specifically, using this optimal latent representation, our model jointly generates synthetic images with accurate semantic information and segments the cardiac structures. We further explore the downstream applications of 3D shape reconstruction and 4D motion-pattern adaptation via different latent-space manipulation strategies. The simultaneously generated high-resolution images also have high interpretive value for assessing cardiac shape and motion. Experimental results demonstrate the effectiveness of our approach on multiple fronts, including 2D segmentation, 3D reconstruction, and downstream 4D motion-pattern adaptation.
With the development of machine learning and data science, data sharing between companies and research institutes has become common as a way to avoid data scarcity. However, sharing original datasets that contain private information can cause privacy leakage. A reliable solution is to use private synthetic datasets that preserve the statistical information of the original datasets. In this paper, we propose MC-GEN, a privacy-preserving synthetic data generation method under a differential privacy guarantee for machine learning classification tasks. MC-GEN applies multi-level clustering and a differentially private generative model to improve the utility of the synthetic data. In our experimental evaluation, we assessed the effects of the parameters and the effectiveness of MC-GEN. The results show that MC-GEN achieves significant effectiveness under certain privacy guarantees on multiple classification tasks. Moreover, we compare MC-GEN with three existing methods, and the results show that it outperforms them in terms of utility.
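As a rough illustration of cluster-wise differentially private synthesis, the toy sketch below perturbs each cluster's mean with Laplace noise before sampling synthetic points. This is a generic sketch under stated assumptions, not MC-GEN's actual generative model; the function name, the sensitivity bound, and the Gaussian sampling around noisy means are all illustrative assumptions.

```python
import numpy as np

def dp_cluster_synthesize(X, labels, epsilon, n_per_cluster, sensitivity=1.0, seed=0):
    """Toy sketch (not MC-GEN): perturb each cluster's mean with Laplace
    noise calibrated to epsilon and an assumed sensitivity bound, then
    sample synthetic points around the noisy mean using the cluster's
    empirical per-feature spread."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for c in np.unique(labels):
        pts = X[labels == c]
        noisy_mean = pts.mean(axis=0) + rng.laplace(0.0, sensitivity / epsilon, pts.shape[1])
        synthetic.append(
            noisy_mean + pts.std(axis=0) * rng.standard_normal((n_per_cluster, pts.shape[1]))
        )
    return np.vstack(synthetic)

X = np.random.default_rng(1).normal(size=(100, 2))
labels = (X[:, 0] > 0).astype(int)   # two toy "clusters"
syn = dp_cluster_synthesize(X, labels, epsilon=1.0, n_per_cluster=50)
print(syn.shape)  # (100, 2)
```

Releasing only noise-perturbed cluster statistics (rather than the raw points) is what allows the synthetic dataset to satisfy a differential privacy guarantee.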
Combining information from multi-view images is crucial to improving the performance and robustness of automated methods for disease diagnosis. However, due to the non-aligned nature of multi-view images, building correlations and fusing data across views largely remain an open problem. In this study, we present TransFusion, a transformer-based architecture that merges divergent multi-view imaging information using convolutional layers and powerful attention mechanisms. In particular, a Divergent Fusion Attention (DiFA) module is proposed for rich cross-view context modeling and semantic dependency mining, addressing the critical issue of capturing long-range correlations between unaligned data from different image views. We further propose Multi-Scale Attention (MSA) to collect global correspondences of multi-scale feature representations. We evaluate TransFusion on the Multi-Disease, Multi-View & Multi-Center Right Ventricular Segmentation in Cardiac MRI (M&Ms-2) challenge cohort. TransFusion demonstrates leading performance against state-of-the-art methods and opens up new perspectives on robust medical image segmentation with multi-view imaging integration.
Finding accurate correspondences among different views is the Achilles' heel of unsupervised Multi-View Stereo (MVS). Existing methods are built on the assumption that corresponding pixels share similar photometric features. However, multi-view images in real scenarios observe non-Lambertian surfaces and experience occlusions. In this work, we propose a novel approach with neural rendering (RC-MVSNet) to resolve such ambiguity in the correspondences among views. Specifically, we impose a depth rendering consistency loss to constrain the geometry features close to the object surface and alleviate occlusions. Concurrently, we introduce a reference view synthesis loss to generate consistent supervision even for non-Lambertian surfaces. Extensive experiments on the DTU and Tanks & Temples benchmarks demonstrate that our RC-MVSNet approach achieves state-of-the-art performance among unsupervised MVS frameworks and competitive performance with many supervised methods. The code is released at https://github.com/Boese0601/RC-MVSNet
Top-down instance segmentation frameworks have shown their superiority in object detection compared to bottom-up frameworks. While they effectively tackle over-segmentation, top-down instance segmentation suffers from the over-crop problem. Yet a complete segmentation mask is crucial for biological image analysis, as it carries important morphological properties such as shape and volume. In this paper, we propose a region-proposal rectification (RPR) module to address this challenging segmentation problem. In particular, we offer a progressive ROIAlign module to gradually introduce neighbor information into a series of ROIs. The ROI features are then fed into a dedicated feed-forward network (FFN) for proposal box regression. With the additional neighbor information, the proposed RPR module shows significant improvement in the rectification of region-proposal locations and thereby exhibits favorable instance segmentation performance on three biological image datasets compared to state-of-the-art baseline methods. Experimental results demonstrate that the proposed RPR module is effective in both anchor-based and anchor-free top-down instance segmentation approaches, suggesting that it can be applied to general top-down instance segmentation of biological images. The code is available.
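The idea of progressively widening ROIs to pull in neighbor context can be sketched as follows. This is a toy illustration only: the function name and the expansion ratios are hypothetical, and the paper's module operates on ROI feature maps via ROIAlign rather than on raw box coordinates.

```python
def progressive_expand(box, ratios):
    """Toy sketch: progressively enlarge an ROI box around its center to
    pull in neighbor context, yielding a series of ROIs (the ratios here
    are illustrative, not the paper's settings)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    rois = []
    for r in ratios:
        w, h = (x2 - x1) * r, (y2 - y1) * r
        rois.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return rois

# A series of ROIs from tight to context-rich for one proposal box.
print(progressive_expand((0, 0, 10, 10), [1.0, 1.2, 1.4]))
```

Features pooled from each ROI in the series would then be aggregated and passed to the regression head that rectifies the proposal box.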
Multi-touch attribution (MTA), which aims to estimate the contribution of each advertisement touchpoint in conversion journeys, is essential for budget allocation and automated advertising. Existing methods first train a model on historical data to predict the conversion probability of ad journeys, and then use counterfactual predictions to compute each touchpoint's attribution. An implicit assumption of these works is that the conversion prediction model is unbiased, i.e., that it gives accurate predictions on any randomly assigned journey, whether factual or counterfactual. Nevertheless, this assumption does not always hold, because exposed ads are recommended according to user preferences. This confounding bias from users leads to an out-of-distribution (OOD) problem in counterfactual prediction and causes concept drift in attribution. In this paper, we define the causal MTA task and propose CausalMTA to eliminate the influence of user preferences. It systematically removes the confounding bias arising from both static and dynamic preferences in order to learn the conversion prediction model from historical data. We also provide a theoretical analysis proving that CausalMTA can learn an unbiased prediction model given sufficient data. Extensive experiments on a public dataset and on impression data from an e-commerce company show that CausalMTA not only achieves better prediction performance than state-of-the-art methods but also produces meaningful attribution credits across different advertising channels.
Multi-view stereo (MVS) with known camera parameters is essentially a 1D search problem within a valid depth range. Recent deep-learning-based MVS methods typically sample depth hypotheses densely within the depth range and then construct prohibitively memory-consuming 3D cost volumes for depth prediction. Although coarse-to-fine sampling strategies alleviate this overhead to a certain extent, the efficiency of MVS remains an open challenge. In this work, we propose a novel method for efficient MVS that significantly decreases the memory footprint while clearly advancing state-of-the-art depth prediction performance. Considering both efficiency and effectiveness, we investigate what kind of search strategy can be reasonably optimal. We first formulate MVS as a binary search problem and accordingly propose a generalized binary search network for MVS. Specifically, at each step the depth range is split into 2 bins, with an extra error-tolerance bin on each side, and a classification is performed to determine which bin contains the true depth. We also design three mechanisms to handle classification errors, deal with out-of-range samples, and decrease the training memory, respectively. The new formulation makes our method sample only a very small number of depth hypotheses at each step, which is highly memory-efficient and also greatly facilitates fast training convergence. Experiments on competitive benchmarks show that our method achieves state-of-the-art accuracy with much less memory. In particular, our method obtains an overall score of 0.289 on the DTU dataset and ranks first among all learning-based methods on the challenging Tanks and Temples advanced dataset. The trained models and code will be released at https://github.com/MiZhenxing/GBi-Net.
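The binary-search formulation can be illustrated with an oracle classifier standing in for the learned bin classifier. This is a toy per-pixel sketch: the paper trains a network to predict the correct bin and adds error-tolerance bins, neither of which is modeled here.

```python
def binary_search_depth(true_depth, d_min, d_max, steps=8):
    """Toy sketch of the binary-search idea: at each step split the depth
    range into 2 bins and decide which bin contains the true depth (an
    oracle comparison here; the paper learns this classification)."""
    lo, hi = d_min, d_max
    for _ in range(steps):
        mid = (lo + hi) / 2
        if true_depth < mid:
            hi = mid   # true depth lies in the lower bin
        else:
            lo = mid   # true depth lies in the upper bin
    return (lo + hi) / 2

est = binary_search_depth(3.37, 0.0, 10.0, steps=16)
print(abs(est - 3.37) < 1e-3)  # True: ~1.5e-4 resolution after 16 halvings
```

This is why only a handful of depth hypotheses per step suffice: k binary steps narrow the range by a factor of 2^k, whereas dense sampling would need exponentially many hypotheses for the same resolution.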